The following content has been provided by the University of Erlangen-Nürnberg.
So today I'm going to talk about the design of an application-specific processor where
we modified the instruction set in order to support very advanced algorithms for real-time
medical monitoring applications.
So the motivation is a condition called pericardial tamponade in which the sac around the heart
starts to fill up with fluid and this impedes the ability of the heart to properly pump
blood.
This is a very good motivating example for real-time medical monitoring because this
condition is 100% fatal if left untreated, yet if you can detect it, it is very, very
easy for a doctor to treat. You simply puncture the outer membrane of the heart with a needle
and you drain the fluid out and everything is fine after that.
Now the problem is that this can be detected and treated easily for a patient under doctor's
care. So for example, this might happen due to a small mistake made during surgery, the
doctor accidentally cuts something, the fluid starts to flow. So in that context it's very,
very easy to quickly detect and quickly fix.
But this is also something that can just happen to a person if they're walking around. And
typically if this starts happening and a doctor doesn't know and you fall down and an ambulance
is called and you get taken to the hospital and they start diagnosing you, you have a
much higher likelihood of dying because this is a real-time problem. You have to detect
that it occurs and get a needle in there fast enough. And if people don't know how to look
for this, it can be very hard.
So what we want to do is to design a wearable computing device that can analyze signals
from the human heart and the human respiratory system and can very quickly detect this and
many other conditions that could be potentially harmful. So it turns out that under normal
operating conditions, your heart and your respiratory system operate completely independently.
They do not affect one another. But when tamponade starts to occur, the respiration
interferes with the pulse. Now this is fairly easy to detect if you understand what you're
looking for in terms of signal processing and you can easily design a computing system
to do this. But figuring out and analyzing this in real time is very computationally
intensive. So that's what we're trying to do. We're trying to create a low-cost hardware
solution that will enable us to monitor for this condition in real time.
The problem that we're looking at is a general term called time series monitoring or in the
database community, similarity search. We are looking for things that are similar. We
have template patterns that will allow us to detect this pulsus paradoxus condition and
we need to be able to look at an ongoing signal being read from a person and say how
similar it is to a syndrome that we're looking for.
Of course, finding similar things is a much more general problem in data mining. So many
techniques that have been designed for things like classification of language or similarity
of images, different aspects of these algorithms can be applied in order to help find similarities
in time series data which is what's shown up top. So I'm going to be talking primarily
today about looking at time series similarity but many of the techniques can actually be
generalized and extended to do different types of similarity searches.
I'm going to talk primarily about two similarity measures. The first is very simple. It is
called Euclidean distance. You take your two curves, you align all of the points
in time, you take the difference between them at each point, you sum the squares
of the differences, and you take a square root at the end. So this is very simple, linear to
compute. The problem with Euclidean distance is that it cannot handle situations where
the curves are in fact quite similar to one another but have small shifts in time. So
you could argue that there are two forms of similarity here. First, each of these curves
has four bumps. Second, if you look at the two bumps together, they are spaced very similarly
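As a rough sketch of the Euclidean distance measure just described, here is a hypothetical Python illustration (not the processor implementation discussed in this talk):

```python
import math

def euclidean_distance(x, y):
    """Point-by-point Euclidean distance between two equal-length time series.

    Each pair of time-aligned samples is compared, the squared differences
    are summed, and the square root is taken at the end -- linear time.
    """
    assert len(x) == len(y), "series must be time-aligned and equal length"
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
```

Because each sample of one series is compared only against the sample at the same time index of the other, even a small shift in time between otherwise similar curves inflates this distance, which is exactly the weakness described above.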
Presenters
Prof. Dr. Philip Brisk
Accessible via
Open access
Duration
00:59:44 min
Recording date
2014-03-21
Uploaded on
2014-03-28 14:55:29
Language
de-DE
Prof. Philip Brisk (University of California, Riverside)
The last decade has seen significant advances in creating bedside monitoring algorithms for a host of medical conditions; however, surprisingly few of these algorithms have seen deployment in wearable devices. The obvious difficulty is the availability of computational resources on a device that is small enough to be convenient and unobtrusive. The computational resource gap between conventional systems and wearable devices can be partly bridged by optimizing the algorithms (admissible pruning, early abandoning, indexing, etc.), but increasingly sophisticated monitoring algorithms have produced an arms race that is outpacing the performance and energy capabilities of the hardware community. Within this context, application- and domain-specialization are ultimately necessary in order to achieve the highest possible efficiency for wearable computing platforms.
Medical monitoring is a specialized form of time series data mining. Most time series data mining algorithms require similarity comparisons as a subroutine, and there is increasing evidence that the Dynamic Time Warping (DTW) measure outperforms the competition in most domains, including medical monitoring. In addition to medical monitoring, DTW has been used in diverse domains such as robotics, medicine, biometrics, music/speech processing, climatology, aviation, gesture recognition, user interfaces, industrial processing, cryptanalysis, mining of historical manuscripts, geology, astronomy, space exploration, wildlife monitoring, and many others. Despite its ubiquity, DTW remains too computationally intensive for use in real-time applications because its core is a dynamic programming algorithm that has a quadratic time complexity; however, recent algorithmic optimizations have enabled DTW to achieve near-constant amortized time when processing time series databases containing trillions of elements.
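The quadratic-time dynamic programming core of DTW mentioned above can be sketched as follows. This is a minimal textbook-style Python illustration; the software-optimized, hardware-accelerated version presented in the talk differs:

```python
def dtw_distance(x, y):
    """Classic O(len(x) * len(y)) dynamic programming DTW distance.

    D[i][j] holds the cost of the best warping path aligning the first
    i samples of x with the first j samples of y.
    """
    n, m = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            # extend the cheapest of the three admissible predecessor cells
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m] ** 0.5
```

Unlike Euclidean distance, the warping path lets one sample of `x` align with several consecutive samples of `y` (and vice versa), so two similar curves that are slightly shifted in time can still achieve a small distance.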
As further software optimization appears unlikely to yield any further improvements, attention must be turned to hardware specialization. This talk will present the design, implementation, and evaluation of an application-specific processor whose instruction set has been customized to accelerate a software-optimized implementation of DTW. Compared to a 32-bit embedded processor, our design yields a 4.87x improvement in performance and a 78% reduction in energy consumption when prototyped on a Xilinx EK-V6-ML605-G Virtex 6 FPGA.